By access type:
  Fee-based full text: 269,617 articles
  Free: 13,409 articles
  Free (domestic): 965 articles
By subject:
  Otorhinolaryngology: 3,173 articles
  Pediatrics: 8,311 articles
  Obstetrics and Gynecology: 6,462 articles
  Basic Medicine: 38,734 articles
  Stomatology: 8,709 articles
  Clinical Medicine: 19,513 articles
  Internal Medicine: 59,673 articles
  Dermatology: 7,235 articles
  Neurology: 23,971 articles
  Special Medicine: 6,637 articles
  Foreign Ethnic Medicine: 2 articles
  Surgery: 28,468 articles
  General: 1,205 articles
  General Theory: 80 articles
  Preventive Medicine: 30,048 articles
  Ophthalmology: 6,103 articles
  Pharmacy: 20,319 articles
  Chinese Medicine: 1,024 articles
  Oncology: 14,323 articles
By year:
  2023: 1,791 articles
  2022: 1,377 articles
  2021: 5,829 articles
  2020: 3,797 articles
  2019: 6,095 articles
  2018: 8,557 articles
  2017: 5,794 articles
  2016: 6,182 articles
  2015: 6,835 articles
  2014: 8,331 articles
  2013: 11,775 articles
  2012: 19,028 articles
  2011: 19,905 articles
  2010: 10,324 articles
  2009: 8,191 articles
  2008: 16,085 articles
  2007: 16,740 articles
  2006: 15,758 articles
  2005: 14,970 articles
  2004: 13,756 articles
  2003: 12,717 articles
  2002: 11,687 articles
  2001: 5,467 articles
  2000: 5,835 articles
  1999: 4,721 articles
  1998: 1,362 articles
  1997: 991 articles
  1995: 890 articles
  1992: 2,529 articles
  1991: 2,206 articles
  1990: 2,214 articles
  1989: 1,897 articles
  1988: 1,806 articles
  1987: 1,682 articles
  1986: 1,722 articles
  1985: 1,588 articles
  1984: 1,185 articles
  1983: 1,040 articles
  1979: 1,292 articles
  1978: 905 articles
  1975: 934 articles
  1974: 1,182 articles
  1973: 1,219 articles
  1972: 1,151 articles
  1971: 1,110 articles
  1970: 1,032 articles
  1969: 1,107 articles
  1968: 1,122 articles
  1967: 1,001 articles
  1966: 897 articles
10,000 query results found; search time: 31 ms.
93.
Background: Acute appendicitis (AA) is one of the most frequent surgical pathologies in pediatrics. Objectives: To investigate the utility of proadrenomedullin (pro-ADM) for the diagnosis of AA. Methods: Prospective, analytical, observational, multicenter study conducted in 6 pediatric emergency departments. Children up to 18 years of age with suspected AA were included; clinical, epidemiological, and analytical data were collected. Results: We studied 285 children with a mean age of 9.5 years (95% confidence interval [CI], 9.1–9.9). AA was diagnosed in 103 children (36.1%), with complications in 10 of them (9.7%). The mean pro-ADM concentration was higher in children with AA (0.51 nmol/L, SD 0.16) than in children with acute abdominal pain (AAP) of another etiology (0.44 nmol/L, SD 0.14; p < 0.001). The difference was greater in complicated than in uncomplicated AA (0.64 nmol/L, SD 0.17 vs 0.50 nmol/L, SD 0.15; p = 0.005). The areas under the receiver operating characteristic curves were 0.66 (95% CI, 0.59–0.72) for pro-ADM, 0.70 (95% CI, 0.63–0.76) for C-reactive protein (CRP), 0.84 (95% CI, 0.79–0.89) for neutrophils, and 0.84 (95% CI, 0.79–0.89) for total leukocytes. The most reliable combination to rule out AA was CRP ≤1.25 mg/dL plus pro-ADM ≤0.35 nmol/L, with a sensitivity of 96% and a negative predictive value of 93%. Conclusion: Children with AA presented higher pro-ADM values than children with AAP of other etiologies, especially in complicated AA. The combination of low pro-ADM and CRP values can help identify children at low risk of AA.
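The rule-out performance reported above can be sanity-checked from a 2x2 table. Below is a minimal Python sketch; the counts are reconstructed to be consistent with the abstract's 103 AA cases, 96% sensitivity, and 93% NPV, and are illustrative assumptions rather than the study's raw data. The function name is ours.

```python
# Minimal sketch of the combined rule-out rule and its test metrics.
# Counts are reconstructed from the abstract's summary statistics
# (assumption), not taken from the study dataset.

def rule_out_negative(crp_mg_dl: float, pro_adm_nmol_l: float) -> bool:
    """True when both markers fall at or below the rule-out cutoffs."""
    return crp_mg_dl <= 1.25 and pro_adm_nmol_l <= 0.35

tp, fn = 99, 4      # AA cases flagged / missed by the rule (assumed split)
tn, fp = 53, 129    # non-AA children below both cutoffs / above at least one

sensitivity = tp / (tp + fn)   # ~0.96, matching the abstract
npv = tn / (tn + fn)           # ~0.93, matching the abstract
print(f"sensitivity = {sensitivity:.2f}, NPV = {npv:.2f}")

# Example: a child with CRP 0.8 mg/dL and pro-ADM 0.30 nmol/L
print(rule_out_negative(crp_mg_dl=0.8, pro_adm_nmol_l=0.30))  # True -> low risk
```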
94.

Background and Aims

T-tube placement during choledochocholedochostomy (CCS) associated with liver transplantation (LT) remains controversial. This study was designed to validate the results of an earlier prospective randomized controlled trial (RCT) on use versus nonuse of the T-tube during CCS associated with LT.

Methods

This was a prospective cohort study. The primary outcome was the overall incidence of biliary complications (BCs).

Results

In total, 405 patients were included, and the median follow-up was 29 months (interquartile range: 13–47 months). Selective use of the T-tube reduced the BC rate (23% vs 13%; P = .003); 75% of BCs were Clavien-Dindo grade IIIa or lower. The overall BC rate did not differ between patients with versus without T-tube placement.

Conclusions

We confirmed that selective use of a rubber T-tube during CCS associated with LT, following the principles established in our prospective RCT, reduced the BC rate by 10 percentage points without detriment, even though the enrolled patients were at a priori greater risk of BCs than the RCT patients.
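The headline 23% vs 13% contrast is a standard two-proportion comparison. A minimal Python sketch follows; because the abstract reports only the percentages and P = .003 without per-group denominators, the group sizes below are assumptions chosen so the computed P value lands near the reported one.

```python
import math

def two_prop_ztest(k1: int, n1: int, k2: int, n2: int) -> tuple[float, float]:
    """Pooled two-proportion z-test; returns (z, two-sided p)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Assumed group sizes (not reported in the abstract); 250 per arm
# roughly reproduces the published P = .003 for 23% vs 13%.
z, p = two_prop_ztest(k1=58, n1=250, k2=33, n2=250)
print(f"z = {z:.2f}, p = {p:.4f}")   # ~2.9, ~0.004
```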
96.
The Lapidus bunionectomy is performed to treat hallux valgus, but recurrence of the deformity remains a concern. A transverse intermetatarsal screw spanning from the base of the first metatarsal to the base of the second can increase stability; however, the neurovascular bundle lies close to this screw's path. In this study, we assessed the structures at risk with this technique in 10 cadaveric specimens. In each specimen, a guide wire was placed and a 4.0-mm cannulated screw was inserted; the neurovascular bundle was then dissected, inspected for direct trauma, and its distance from the screw measured with a digital caliper. The dorsalis pedis artery and deep peroneal nerve were free from injury in 9 of 10 specimens. In those 9 specimens, the neurovascular bundle was located dorsal to the screw; the mean distance from the screw to the neurovascular bundle was 7.1 ± 3.3 mm, the mean distance from the screw to the first tarsometatarsal joint (TMTJ) was 14.7 ± 4.3 mm, and the mean distance from the screw's entry into the second metatarsal to the second TMTJ was 18.0 ± 7.2 mm. In 1 specimen, the screw traversed the neurovascular bundle; in that specimen, the distance from the screw to the first TMTJ was 15.0 mm and the distance from the screw's entry into the second metatarsal to the second TMTJ was 24.0 mm. Although the intermetatarsal screw avoided the neurovascular bundle in most instances, the technique carries some anatomic risk to the bundle. Further study is warranted to evaluate clinical results of the intermetatarsal screw in the modified Lapidus procedure.
97.
Darwinian evolution tends to produce energy-efficient outcomes. On the other hand, energy limits computation, be it neural and probabilistic or digital and logical. Taking a particular energy-efficient viewpoint, we define neural computation and make use of an energy-constrained computational function. This function can be optimized over a variable that is proportional to the number of synapses per neuron. This function also implies a specific distinction between adenosine triphosphate (ATP)-consuming processes, especially computation per se vs. the communication processes of action potentials and transmitter release. Thus, to apply this mathematical function requires an energy audit with a particular partitioning of energy consumption that differs from earlier work. The audit points out that, rather than the oft-quoted 20 W of glucose available to the human brain, the fraction partitioned to cortical computation is only 0.1 W of ATP [L. Sokoloff, Handb. Physiol. Sect. I Neurophysiol. 3, 1843–1864 (1960)] and [J. Sawada, D. S. Modha, “Synapse: Scalable energy-efficient neurosynaptic computing” in Application of Concurrency to System Design (ACSD) (2013), pp. 14–15]. On the other hand, long-distance communication costs are 35-fold greater, 3.5 W. Other findings include 1) a 10⁸-fold discrepancy between biological and lowest possible values of a neuron’s computational efficiency and 2) two predictions of N, the number of synaptic transmissions needed to fire a neuron (2,500 vs. 2,000).

The purpose of the brain is to process information, but that leaves us with the problem of finding appropriate definitions of information processing. We assume that, given enough time and a sufficiently stable environment (e.g., the common internals of the mammalian brain), Nature’s constructions approach an optimum. The problem is to find which function or combined set of functions is optimal when incorporating empirical values into these function(s). The initial example in neuroscience is ref. 1, which shows that information capacity is far from optimized, especially in comparison to the optimal information per joule, which is in much closer agreement with empirical values. Whenever we find such an agreement between theory and experiment, we conclude that this optimization, or near optimization, is Nature’s perspective. Using this strategy, we and others seek quantified relationships with particular forms of information processing and require that these relationships are approximately optimal (1–7). At the level of a single neuron, a recent theoretical development identifies a potentially optimal computation (8). To apply this conjecture requires understanding certain neuronal energy expenditures. Here the focus is on the energy budget of the human cerebral cortex and its primary neurons. The energy audit here differs from the premier earlier work (9) in two ways: the brain considered here is human, not rodent, and the audit here uses a partitioning motivated by the information-efficiency calculations rather than the classical partitions of cell biology and neuroscience (9). Importantly, our audit reveals greater energy use by communication than by computation. This observation in turn generates additional insights into the optimal synapse number. Specifically, the bits-per-joule-optimized computation must provide sufficient bits per second to the axon and presynaptic mechanism to justify the great expense of timely communication. Simply put, from the optimization perspective, we assume evolution would not build a costly communication system and then fail to supply it with the bits per second needed to justify its costs. The bits per joule are optimized with respect to N, the number of synaptic activations per interpulse interval (IPI) for one neuron, where N happens to equal the number of synapses per neuron times the success rate of synaptic transmission (below).

To measure computation, and to partition out its cost, requires a suitable definition at the single-neuron level. Rather than the generic definition “any signal transformation” (3) or the neural-like “converting a multivariate signal to a scalar signal,” we conjecture a more detailed definition (8). To move toward this definition, note two important brain functions: estimating what is present in the sensed world and predicting what will be present, including what will occur as the brain commands manipulations. Then, assume that such macroscopic inferences arise by combining single-neuron inferences. That is, conjecture a neuron performing microscopic estimation or prediction. Instead of sensing the world, a neuron’s sensing is merely its capacitive charging due to recently active synapses. Using this sampling of total accumulated charge over a particular elapsed time, a neuron implicitly estimates the value of its local latent variable, a variable defined by evolution and developmental construction (8).
Applying an optimization perspective, which includes implicit Bayesian inference, a sufficient statistic, and maximum-likelihood unbiasedness, as well as energy costs (8), produces a quantified theory of single-neuron computation. This theory implies the optimal IPI probability distribution. Motivating IPI coding is this fact: the use of constant-amplitude signaling, e.g., action potentials, implies that all information can only be in IPIs. Therefore, no code can outperform an IPI code, and it can equal an IPI code in bit rate only if it is one to one with an IPI code. In neuroscience, an equivalent to IPI codes is the instantaneous rate code, where each message is IPI⁻¹. In communication theory, a discrete form of IPI coding is called differential pulse position modulation (10); ref. 11 explicitly introduced a continuous form of this coding as a neuron communication hypothesis, and it receives further development in ref. 12.

Results recall and further develop earlier work concerning a certain optimization that defines IPI probabilities (8). An energy audit is required to use these developments. Combining the theory with the audit leads to two outcomes: 1) the optimizing N serves as a consistency check on the audit, and 2) future energy audits for individual cell types will predict N for each cell type, a test of the theory. Specialized approximations here that are not present in earlier work (9) include the assumptions that 1) all neurons of cortex are pyramidal neurons, 2) pyramidal neurons are the inputs to pyramidal neurons, 3) a neuron is under constant synaptic bombardment, and 4) a neuron’s capacitance must be charged 16 mV from reset potential to threshold in order to fire.

Following the audit, the reader is given a perspective that may be obvious to some, but it is rarely discussed and seemingly contradicts the engineering literature (but see ref. 6). In particular, a neuron is an incredibly inefficient computational device in comparison to an idealized physical analog. It is not just a few bits per joule away from the optimum predicted by the Landauer limit, but off by a huge amount, a factor of 10⁸. The theory here resolves the efficiency issue using a modified optimization perspective: activity-dependent communication and synaptic modification costs force the optimal computational costs upward. In turn, the bit value of the computational energy expenditure is constrained to a central-limit-like result: every doubling of N can produce no more than 0.5 bits. In addition to 1) explaining the 10⁸-fold excess energy use, other results here include 2) identifying the largest “noise” source limiting computation, which is the signal itself, and 3) partitioning the relevant costs, which may help engineers redirect focus toward computation and communication costs rather than the 20-W total brain consumption as their design goal.
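The audit's headline numbers lend themselves to a back-of-envelope check. The Python sketch below recomputes the 35-fold communication-to-computation ratio, the rough 10⁸ gap to the Landauer limit, and the 0.5-bit-per-doubling rule. The cortical neuron count and per-neuron bit rate used here are our own order-of-magnitude assumptions, not values stated in the abstract.

```python
import math

# Landauer limit: minimum energy to erase one bit at body temperature.
k_B = 1.380649e-23                            # Boltzmann constant, J/K
T = 310.0                                     # body temperature, K
landauer_J_per_bit = k_B * T * math.log(2)    # ~3.0e-21 J per bit

# Audit figures quoted in the abstract (human cortex):
compute_W = 0.1   # W of ATP attributed to computation per se
comm_W = 3.5      # W attributed to long-distance communication
print(f"communication / computation = {comm_W / compute_W:.0f}x")  # 35x

# Illustrative neuron-level comparison (assumed numbers):
n_neurons = 1.5e10               # assumption: ~1.5e10 cortical neurons
bits_per_neuron_per_s = 10.0     # assumption: order-of-magnitude bit rate
bio_J_per_bit = (compute_W / n_neurons) / bits_per_neuron_per_s
print(f"biological J/bit: {bio_J_per_bit:.1e}")
print(f"Landauer J/bit  : {landauer_J_per_bit:.1e}")
print(f"inefficiency    : {bio_J_per_bit / landauer_J_per_bit:.1e}")  # ~2e8

# The 0.5-bit rule: if estimation information grows like 0.5*log2(N),
# then doubling N adds exactly 0.5 bits.
for N in (1250, 2500, 5000):
    print(N, round(0.5 * math.log2(N), 2))
```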
100.
Introduction: Scales for predicting venous thromboembolism (VTE) recurrence are useful for deciding the duration of anticoagulant treatment. Although several scales exist, the most appropriate for our setting has not been identified. For this reason, we aimed to validate the DASH prediction score and the Vienna nomogram at 12 months. Methods: This was a retrospective study of unselected consecutive VTE patients seen between 2006 and 2014. We compared the ability of the DASH score and the Vienna nomogram to predict VTE recurrence. Validation was performed by stratifying patients as low risk or high risk according to each scale (discrimination) and comparing the observed recurrence with the expected rate (calibration). Results: Of 353 patients evaluated, 195 were analyzed, with a mean age of 53.5 ± 19 years. There were 21 recurrences in 1 year (10.8%; 95% CI, 6.8%–16%). According to the DASH score, 42% were classified as low risk; the VTE recurrence rate was 4.9% (95% CI, 1.3%–12%) in this group vs 15% (95% CI, 9%–23%) in the high-risk group (p < .05). According to the Vienna nomogram, 30% were classified as low risk; the VTE recurrence rate was 4.2% (95% CI, 0.5%–14%) in the low-risk group vs 16.2% (95% CI, 9.9%–24.4%) in the high-risk group (p < .05). Conclusions: Our study validates the DASH score and the Vienna nomogram in our population. The DASH score may be the more advisable of the two, both for its simplicity and for its ability to classify more patients as low risk than the Vienna nomogram (42% vs 30%).
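The stratified recurrence rates and their confidence intervals can be reproduced from event counts. Below is a minimal Python sketch using the Wilson score interval; the DASH counts (4/82 low risk, 17/113 high risk) are reconstructed from the reported percentages (an assumption), and the published intervals were likely computed with a different (e.g., exact binomial) method, so the bounds will not match digit for digit.

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Counts reconstructed from the abstract (assumption): 195 patients,
# 21 recurrences; DASH low risk 42% of 195 -> 82 patients, 4 events.
strata = {
    "overall": (21, 195),
    "DASH low risk": (4, 82),
    "DASH high risk": (17, 113),
}
for name, (k, n) in strata.items():
    lo, hi = wilson_ci(k, n)
    print(f"{name}: {k}/{n} = {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```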